Stochastic Variance Reduction Methods for Saddle-Point Problems

Authors

  • Balamurugan Palaniappan
  • Francis R. Bach
Abstract

We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems, which are common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities; (b) there are two notions of splits, in terms of functions or in terms of partial derivatives; (c) the split does not need to be done with convex-concave terms; (d) non-uniform sampling is key to an efficient algorithm, both in theory and practice; and (e) these incremental algorithms can be easily accelerated using a simple extension of the “catalyst” framework, leading to an algorithm which is always superior to accelerated batch algorithms.
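
The variance-reduced primal-dual idea in the abstract can be made concrete for a separable bilinear model. The following is a minimal SVRG-style sketch under simplifying assumptions (explicit rather than proximal steps, uniform rather than the non-uniform sampling the paper advocates, hand-picked step size eta), not the paper's exact algorithm:

```python
import numpy as np

def svrg_saddle(As, lam=1.0, gam=1.0, eta=0.02, n_epochs=30, seed=0):
    """SVRG-style sketch for the separable bilinear saddle-point problem

        min_x max_y  (1/n) * sum_i y^T A_i x + (lam/2)||x||^2 - (gam/2)||y||^2.

    Each epoch stores a snapshot (x_s, y_s), computes the full partial
    gradients there, and then takes stochastic steps whose noise vanishes
    as the iterates approach the snapshot (the variance-reduction trick).
    """
    rng = np.random.default_rng(seed)
    n = len(As)
    m, d = As[0].shape
    x, y = np.zeros(d), np.zeros(m)
    for _ in range(n_epochs):
        x_s, y_s = x.copy(), y.copy()                 # snapshot point
        gx_full = sum(A.T @ y_s for A in As) / n      # full partial gradients at the snapshot
        gy_full = sum(A @ x_s for A in As) / n
        for _ in range(n):
            A = As[rng.integers(n)]                   # uniform sampling (a simplification)
            gx = A.T @ y - A.T @ y_s + gx_full        # variance-reduced estimate of d/dx
            gy = A @ x - A @ x_s + gy_full            # variance-reduced estimate of d/dy
            x = x - eta * (gx + lam * x)              # gradient descent in x
            y = y + eta * (gy - gam * y)              # gradient ascent in y
    return x, y

# Usage: for this model the unique saddle point is (0, 0), so for a small
# enough step size the iterates contract toward the origin.
As = [np.random.default_rng(i).normal(size=(5, 3)) for i in range(10)]
x, y = svrg_saddle(As)
```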

Similar articles

Stochastic Variance Reduction Methods for Policy Evaluation

Policy evaluation is a crucial step in many reinforcement-learning procedures: it estimates a value function that predicts states' long-term value under a given policy. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle-point problem, and then ...
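
The transformation this snippet alludes to can be sketched as follows, assuming the standard MSPBE-style construction for linear value functions V(s) = theta^T phi(s); the names phi, phi_next, rewards and the exact formulation are assumptions here, and the cited paper's details may differ:

```python
import numpy as np

def policy_eval_saddle(phi, phi_next, rewards, gamma=0.99):
    """Build the empirical moments of the quadratic saddle-point form

        min_theta max_w  w^T (b - A theta) - (1/2) w^T C w,

    whose solution satisfies A theta = b (the TD fixed point).
    phi, phi_next: (n, d) feature matrices; rewards: (n,) vector.
    """
    n = phi.shape[0]
    A = phi.T @ (phi - gamma * phi_next) / n   # E[phi (phi - gamma phi')^T]
    b = phi.T @ rewards / n                    # E[r phi]
    C = phi.T @ phi / n                        # E[phi phi^T]
    return A, b, C

def saddle_objective(theta, w, A, b, C):
    # Convex (linear) in theta, concave (negative-definite quadratic) in w.
    return w @ (b - A @ theta) - 0.5 * w @ (C @ w)
```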

Generalized iterative methods for solving double saddle point problem

In this paper, we develop some stationary iterative schemes in block form for solving the double saddle point problem. To this end, we first generalize the Jacobi iterative method and study its convergence under certain conditions. Moreover, using a relaxation parameter, the weighted version of the Jacobi method together with its convergence analysis is considered. Furthermore, we extend a method...
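
As a rough illustration of such stationary schemes, the generic relaxed (weighted) iteration x <- x + omega * M^{-1}(b - Kx) is sketched below; the block-diagonal preconditioner M and the relaxation parameter omega are illustrative assumptions, not the particular splittings analyzed in the cited paper:

```python
import numpy as np

def weighted_jacobi(K, b, M, omega=0.5, tol=1e-8, max_iter=10000):
    """Relaxed stationary iteration for K x = b.

    M stands in for the (block-)diagonal part of K; since a double
    saddle-point matrix has zero diagonal blocks, M must replace those with
    nonsingular blocks (e.g. scaled identities). The scheme converges
    whenever the spectral radius of I - omega * inv(M) @ K is below one.
    """
    x = np.zeros_like(b, dtype=float)
    for it in range(max_iter):
        r = b - K @ x                          # residual of the current iterate
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x += omega * np.linalg.solve(M, r)     # relaxed Jacobi-type correction
    return x, it
```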

On the parallel solution of dense saddle-point linear systems arising in stochastic programming

We present a novel approach for solving dense saddle-point linear systems in a distributed-memory environment. This work is motivated by an application in stochastic optimization problems with recourse, but the proposed approach can be used for a large family of dense saddle-point systems, in particular those arising in convex programming. Although stochastic optimization problems have many impo...

Stochastic Parallel Block Coordinate Descent for Large-Scale Saddle Point Problems

We consider convex-concave saddle point problems with a separable structure and non-strongly convex functions. We propose an efficient stochastic block coordinate descent method using adaptive primal-dual updates, which enables flexible parallel optimization for large-scale problems. Our method shares the efficiency and flexibility of block coordinate descent methods with the simplicity of prim...
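
A minimal sketch of a stochastic block update of this flavor is given below, for a bilinear coupling with quadratic regularizers; the fixed (rather than adaptive) step sizes, the full dual step, and the strongly convex quadratic terms are simplifying assumptions for illustration, not the cited method:

```python
import numpy as np

def stochastic_block_primal_dual(A, blocks, tau=0.05, sigma=0.05, n_iters=2000, seed=0):
    """Stochastic block primal-dual sketch for

        min_x max_y  y^T A x + (1/2)||x||^2 - (1/2)||y||^2.

    Each iteration updates one randomly chosen primal block by a partial
    gradient step, then takes a full dual ascent step. Only the selected
    columns of A are touched on the primal side, which is what makes block
    updates cheap and easy to parallelize.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(n_iters):
        idx = blocks[rng.integers(len(blocks))]          # random primal block
        x[idx] -= tau * (A[:, idx].T @ y + x[idx])       # descent on that block only
        y += sigma * (A @ x - y)                         # ascent in the dual variable
    return x, y

# Usage: split 6 coordinates into 3 blocks; for small steps the iterates
# contract toward the saddle point (0, 0).
A = np.random.default_rng(0).normal(size=(4, 6)) * 0.3
x, y = stochastic_block_primal_dual(A, [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)])
```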

Publication date: 2016